
Gatsby Computational Neuroscience Unit


Andrew Saxe

Tuesday 28th May 2019

Time: 4.00pm

Ground Floor Seminar Room
25 Howland Street, London, W1T 4JG

Principles of learning in distributed brain networks

The brain is an unparalleled learning system. It is also an intricately interconnected network of brain areas, which might make learning difficult: a local change to one area will have rippling consequences when propagated through the rest of the network. How might the structure of the brain impact its learning dynamics? Answering this question requires theoretical advances that shine light into the black box of neuronal networks and detailed empirical predictions for specific experimental paradigms. To understand the specific ramifications of layered structure, I develop the theory of learning in deep linear neural networks. The theory answers questions such as how learning speed scales with depth, and why overparametrized neural networks can generalize well from limited data. Building on this, I will describe implications for three experimental domains. First, I will describe a theory of human semantic development and the acquisition of richly structured knowledge. Next, I will describe a theory of perceptual learning in the brain’s visual hierarchy. And finally, I will discuss learning dynamics for noisy perceptual decisions in networks with recurrent connections. Overall, these results suggest that the brain’s structure may powerfully sculpt learning dynamics. By extending our mathematical toolkit for analyzing learning dynamics in complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of learning complex behaviors.
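For readers unfamiliar with the model class the abstract refers to, the following is a minimal illustrative sketch, not code from the talk: it trains a two-layer (deep) linear network with batch gradient descent on a toy teacher map and tracks how strongly each singular mode of the input-output mapping has been learned. All variable names, dimensions, and parameter values are assumptions chosen for illustration. Consistent with the published theory of deep linear networks, modes with larger singular values are learned earlier, along sigmoidal trajectories.

# Minimal sketch: learning dynamics in a deep linear network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: random inputs, targets generated by a rank-3 linear teacher
# whose singular modes have different strengths.
n_in, n_hidden, n_out, n_samples = 8, 8, 8, 500
U, _ = np.linalg.qr(rng.standard_normal((n_out, n_out)))
V, _ = np.linalg.qr(rng.standard_normal((n_in, n_in)))
s_true = np.array([3.0, 2.0, 1.0] + [0.0] * (n_in - 3))
W_true = U @ np.diag(s_true) @ V.T

X = rng.standard_normal((n_samples, n_in))
Y = X @ W_true.T

# Two-layer linear network: Y_hat = X @ W1.T @ W2.T, small random init.
W1 = 0.01 * rng.standard_normal((n_hidden, n_in))
W2 = 0.01 * rng.standard_normal((n_out, n_hidden))

lr, steps = 0.01, 2000
mode_strengths = []
for t in range(steps):
    H = X @ W1.T                 # hidden activations
    E = H @ W2.T - Y             # error on the batch
    # Batch gradient descent on mean squared error.
    g2 = E.T @ H / n_samples
    g1 = W2.T @ E.T @ X / n_samples
    W2 -= lr * g2
    W1 -= lr * g1
    # Project the network's overall map onto the teacher's singular modes;
    # each entry grows sigmoidally, strong modes first.
    s_eff = np.diag(U.T @ (W2 @ W1) @ V)
    mode_strengths.append(s_eff[:3].copy())

mode_strengths = np.array(mode_strengths)
print("final mode strengths:", np.round(mode_strengths[-1], 2))
print("true singular values:", s_true[:3])

Plotting mode_strengths over training steps would show the staggered sigmoidal learning curves characteristic of deep linear dynamics: the mode with singular value 3 rises first, followed by the weaker modes, mirroring the abstract's point about how layered structure shapes learning speed.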

Bio: Dr. Andrew Saxe is a Postdoctoral Research Associate in the Department of Experimental Psychology, University of Oxford, working with Christopher Summerfield and Tim Behrens. He was previously a Swartz Fellow at Harvard University with Haim Sompolinsky. He completed his PhD in Electrical Engineering at Stanford University, advised by Jay McClelland, Surya Ganguli, Andrew Ng, and Christoph Schreiner. His dissertation received the Robert J. Glushko Dissertation Prize from the Cognitive Science Society. His research focuses on the theory of deep learning and its applications to phenomena in neuroscience and psychology.